Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions
What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:
- NTIME[n] cannot be solved in quasi-linear time on average if UP ⊄ DTIME[2^{Õ(√n)}].
- Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_k SAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known whether even average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n].
- Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.
Our results are obtained by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct highly efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.
Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures
We show direct and conceptually simple reductions between the classical
learning with errors (LWE) problem and its continuous analog, CLWE (Bruna,
Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful
machinery of LWE-based cryptography to the applications of CLWE. For example,
we obtain the hardness of CLWE under the classical worst-case hardness of the
gap shortest vector problem. Previously, this was known only under quantum
worst-case hardness of lattice problems. More broadly, with our reductions
between the two problems, any future developments to LWE will also apply to
CLWE and its downstream applications.
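To make the two problems being related concrete, here is a minimal sketch of how LWE and CLWE samples look side by side. The parameters are toy-sized (not cryptographic), and the LWE error is simplified to a rounded continuous Gaussian; the CLWE form follows the (y, γ⟨y, w⟩ + e mod 1) shape of Bruna, Regev, Song and Tang.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only -- far too small to be secure.
n, q, num_samples = 8, 97, 5
sigma = 1.0               # LWE noise width (rounded Gaussian, a simplification)
beta, gamma = 0.05, 2.0   # CLWE noise width and scaling factor

# --- LWE: samples (a, b) with b = <a, s> + e  (mod q) ---
s = rng.integers(0, q, size=n)                                # secret in Z_q^n
A = rng.integers(0, q, size=(num_samples, n))                 # uniform a_i
e = np.rint(rng.normal(0, sigma, num_samples)).astype(int)    # small noise
b = (A @ s + e) % q

# --- CLWE: samples (y, z) with z = gamma * <y, w> + e'  (mod 1) ---
w = rng.normal(size=n)
w /= np.linalg.norm(w)                                        # unit secret direction
Y = rng.normal(size=(num_samples, n))                         # Gaussian y_i
z = (gamma * (Y @ w) + rng.normal(0, beta, num_samples)) % 1.0

print(b)   # LWE labels live in Z_q
print(z)   # CLWE labels live on the torus [0, 1)
```

The structural similarity (a noisy inner product with a hidden secret, reduced modulo q or modulo 1) is what the paper's reductions exploit.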
As a concrete application, we show an improved hardness result for density
estimation for mixtures of Gaussians. In this computational problem, given
sample access to a mixture of Gaussians, the goal is to output a function that
estimates the density function of the mixture. Under the (plausible and widely
believed) exponential hardness of the classical LWE problem, we show that
Gaussian mixture density estimation in R^n with roughly log n Gaussian components given poly(n) samples requires time quasi-polynomial in n. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for n^ε Gaussians for any constant ε > 0, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least √n Gaussians under polynomial (quantum) hardness assumptions.
Our key technical tool is a reduction from classical LWE to LWE with k-sparse secrets, where the multiplicative increase in the noise is only O(√k), independent of the ambient dimension n.
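The density estimation task above can be illustrated on a toy one-dimensional instance: draw samples from a known mixture and output an estimator of its density. The kernel density estimator below is a naive baseline, not the paper's method; all parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D mixture of k unit-variance Gaussians (toy instance of the problem).
k = 3
means = np.array([-4.0, 0.0, 5.0])
weights = np.array([0.2, 0.5, 0.3])

def mixture_density(x):
    """True density: sum_i w_i * N(x; mu_i, 1)."""
    return sum(w * np.exp(-(x - m) ** 2 / 2) / np.sqrt(2 * np.pi)
               for w, m in zip(weights, means))

# Sample access: pick a component, then draw from it.
m = 5000
comps = rng.choice(k, size=m, p=weights)
samples = rng.normal(means[comps], 1.0)

# Naive estimator: Gaussian kernel density estimate with bandwidth h.
h = 0.5
def kde(x):
    return np.mean(np.exp(-(x - samples[:, None]) ** 2 / (2 * h * h)),
                   axis=0) / (h * np.sqrt(2 * np.pi))

xs = np.linspace(-8, 8, 9)
err = np.max(np.abs(kde(xs) - mixture_density(xs)))
print(err)  # small pointwise error on this easy low-dimensional instance
```

The hardness results concern the high-dimensional regime, where no estimator (kernel-based or otherwise) can run in subquasipolynomial time under the stated LWE assumptions.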
MacORAMa: Optimal Oblivious RAM with Integrity
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM '96), is a primitive that allows a client to perform RAM computations on an external database without revealing any information through the access pattern. For a database of size N, well-known lower bounds show that a multiplicative overhead of Ω(log N) in the number of RAM queries is necessary assuming O(1) client storage. A long sequence of works culminated in the asymptotically optimal construction of Asharov, Komargodski, Lin, and Shi (CRYPTO '21) with O(log N) worst-case overhead and O(1) client storage. However, this optimal ORAM is known to be secure only in the honest-but-curious setting, where an adversary is allowed to observe the access patterns but not modify the contents of the database. In the malicious setting, where an adversary is additionally allowed to tamper with the database, this construction and many others in fact become insecure.
In this work, we construct the first maliciously secure ORAM with worst-case O(log N) overhead and O(1) client storage assuming one-way functions, which are also necessary. By the Ω(log N) lower bound, our construction is asymptotically optimal. To attain this overhead, we develop techniques to intricately interleave online and offline memory checking for malicious security. Furthermore, we complement our positive result by showing the impossibility of a generic overhead-preserving compiler from honest-but-curious to malicious security, barring a breakthrough in memory checking.
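To illustrate the memory-checking ingredient mentioned above, here is a minimal Merkle-tree sketch: a textbook online memory checker in which the client keeps only the root hash (constant storage) and verifies each read against a logarithmic-length authentication path. This is a standard construction, not the paper's interleaved checker.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Server-side structure: hash leaves, then hash pairs up to the root."""
    level = [H(b"leaf:" + v) for v in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree  # tree[-1][0] is the root

def prove(tree, idx):
    """Sibling hashes along the path from leaf idx to the root."""
    path = []
    for level in tree[:-1]:
        path.append((level[idx ^ 1], idx & 1))  # (sibling, am-I-right-child)
        idx >>= 1
    return path

def verify(root, value, path):
    """Client-side check: recompute the root from the claimed value."""
    h = H(b"leaf:" + value)
    for sib, is_right in path:
        h = H(sib + h) if is_right else H(h + sib)
    return h == root

data = [b"block%d" % i for i in range(8)]
tree = build_tree(data)
root = tree[-1][0]  # the only state the client must store

assert verify(root, data[5], prove(tree, 5))          # honest server passes
assert not verify(root, b"tampered", prove(tree, 5))  # tampering is detected
```

Each verified access costs O(log N) hashes, which is exactly the kind of overhead the paper must absorb without exceeding its total O(log N) query budget.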
The Non-hardness of Approximating Circuit Size
Abstract
The Minimum Circuit Size Problem (MCSP) has been the focus of intense study recently; MCSP is hard for SZK under rather powerful reductions (Allender and Das Inf. Comput. 256, 2–8, 2017), and is provably not hard under "local" reductions computable in TIME(n^{0.49}) (Murray and Williams Theory Comput. 13(1), 1–22, 2017). The question of whether MCSP is NP-hard (or indeed, hard even for small subclasses of P) under some of the more familiar notions of reducibility (such as many-one or Turing reductions computable in polynomial time or in AC^0) is closely related to many of the longstanding open questions in complexity theory (Allender and Hirahara ACM Trans. Comput. Theory 11(4), 27:1–27:27, 2019; Allender et al. Comput. Complex. 26(2), 469–496, 2017; Hirahara and Santhanam 2017; Hirahara and Watanabe 2016; Hitchcock and Pavan 2015; Impagliazzo et al. 2018; Murray and Williams Theory Comput. 13(1), 1–22, 2017). All prior hardness results for MCSP hold also for computing somewhat weak approximations to the circuit complexity of a function (Allender et al. SIAM J. Comput. 35(6), 1467–1493, 2006; Allender and Das Inf. Comput. 256, 2–8, 2017; Allender et al. J. Comput. Syst. Sci. 77(1), 14–40, 2011; Hirahara and Santhanam 2017; Kabanets and Cai 2000; Rudow Inf. Process. Lett. 128, 1–4, 2017) (subsequent to our work, a new hardness result has been announced (Ilango 2020) that relies on more exact size computations). Some of these results were proved by exploiting a connection to a notion of time-bounded Kolmogorov complexity (KT) and the corresponding decision problem (MKTP). More recently, a new approach for proving improved hardness results for MKTP was developed (Allender et al. SIAM J. Comput. 47(4), 1339–1372, 2018; Allender and Hirahara ACM Trans. Comput. Theory 11(4), 27:1–27:27, 2019), but this approach establishes only hardness of extremely good approximations of the form 1 + o(1), and these improved hardness results are not yet known to hold for MCSP.
In particular, it is known that MKTP is hard for the complexity class DET under nonuniform ≤_m^{AC^0} reductions, implying MKTP is not in AC^0[p] for any prime p (Allender and Hirahara ACM Trans. Comput. Theory 11(4), 27:1–27:27, 2019). It was still open if similar circuit lower bounds hold for MCSP (but see Golovnev et al. 2019; Ilango 2020). One possible avenue for proving a similar hardness result for MCSP would be to improve the hardness of approximation for MKTP beyond 1 + o(1) to ω(1), as KT-complexity and circuit size are polynomially related. In this paper, we show that this approach cannot succeed. More specifically, we prove that PARITY does not reduce to the problem of computing superlinear approximations to KT-complexity or circuit size via AC^0-Turing reductions that make O(1) queries. This is significant, since approximating any set in P/poly AC^0-reduces to just one query of a much worse approximation of circuit size or KT-complexity (Oliveira and Santhanam 2017). For weaker approximations, we also prove non-hardness under more powerful reductions. Our non-hardness results are unconditional, in contrast to the conditional results presented in Allender and Hirahara (ACM Trans. Comput. Theory 11(4), 27:1–27:27, 2019) (for more powerful reductions, but for much worse approximations). This highlights obstacles that would have to be overcome by any proof that MKTP or MCSP is hard for NP under AC^0 reductions. It may also be a step toward confirming a conjecture of Murray and Williams, that MCSP is not NP-complete under logtime-uniform
≤_m^{AC^0} reductions.
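MCSP in miniature: given the truth table of a Boolean function, find the size of the smallest circuit computing it. The brute-force search below does this for two-input functions over NAND gates only (a toy gate basis chosen for brevity; the paper's MCSP is over general circuits and exponentially larger truth tables). It illustrates why the problem's complexity is subtle: the obvious algorithm is an exhaustive search.

```python
from itertools import combinations_with_replacement

N_INPUTS = 2
ROWS = 1 << N_INPUTS          # 4 rows in a 2-input truth table
MASK = (1 << ROWS) - 1        # truth tables encoded as 4-bit integers

# Truth tables of the input variables x0, x1 (bit `row` = value on that row).
inputs = []
for j in range(N_INPUTS):
    t = 0
    for row in range(ROWS):
        if (row >> j) & 1:
            t |= 1 << row
    inputs.append(t)

def nand(a, b):
    return ~(a & b) & MASK

def min_nand_gates(target):
    """Breadth-first search over sets of computable truth tables: the level
    at which `target` first appears is its NAND-gate complexity."""
    frontier = {frozenset(inputs)}
    for gates in range(10):
        if any(target in s for s in frontier):
            return gates
        nxt = set()
        for s in frontier:
            for a, b in combinations_with_replacement(sorted(s), 2):
                nxt.add(s | {nand(a, b)})
        frontier = nxt

print(min_nand_gates(inputs[0] ^ inputs[1]))  # XOR needs 4 NAND gates
```

Even for this tiny basis the search space grows explosively with the number of inputs, which is the intuition behind treating exact (and approximate) circuit size as a computationally hard quantity.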